
American Journal of Infection Control

Elsevier BV

Preprints posted in the last 90 days, ranked by how well they match American Journal of Infection Control's content profile, based on 12 papers previously published here. The average preprint has a 0.04% match score for this journal, so anything above that is already an above-average fit.

1
Bayesian generative modeling for heterogeneous wastewater data applied to COVID-19 forecasting

Johnson, K. E.; Vega Yon, G.; Brand, S. P. C.; Bernal Zelaya, C.; Bayer, D.; Volkov, I.; Susswein, Z.; Magee, A.; Gostic, K. M.; English, K. M.; Ghinai, I.; Hamlet, A.; Olesen, S. W.; Pulliam, J.; Abbott, S.; Morris, D. H.

2026-02-24 infectious diseases 10.64898/2026.02.23.26346887
Top 0.3%
33× avg

Infectious disease forecasts can inform public health decision-making. Wastewater monitoring is a relatively new epidemiological data source with multiple potential applications, including forecasting. Incorporating wastewater data into epidemiological forecasting models is challenging, and relatively few studies have assessed whether doing so improves forecast performance. We present and evaluate a semi-mechanistic wastewater-informed forecasting model. The model forecasts COVID-19 hospital admissions at the state and territorial levels in the United States, based on incident hospital admissions data and, optionally, SARS-CoV-2 wastewater concentration data from multiple wastewater sampling sites. From February through April 2024, we produced real-time wastewater-informed COVID-19 forecasts using development versions of the model and submitted them to the United States COVID-19 Forecast Hub ("the Hub"). We then published an open-source R package, wwinference, that implements the model with or without wastewater as an input. Using proper scoring rules and measures of model calibration, we assess both our real-time submissions to the Hub and retrospective hypothetical forecasts from wwinference made with and without wastewater data. While the models performed similarly with and without the wastewater signal included, there was substantial heterogeneity across individual locations and dates, with wastewater data meaningfully improving or degrading the model's forecast performance in particular instances. Compared to other models submitted to the Hub during the period spanned by our submissions, the real-time wastewater-informed version of our model ranked fourth of 10 models, with the hospital admissions-only version ranking second. Across the 2023-2024 winter epidemic wave, retrospective forecasts from wwinference would have performed similarly with and without the wastewater signal included: fifth and fourth out of 10 models, respectively.
To better understand the drivers of differential forecast performance with and without wastewater, we performed an exploratory analysis of the relationship between characteristics of the input data and improved or reduced performance in our model. Based on that analysis, we identify and discuss key areas for further model development. To our knowledge, this is the first work to evaluate real-time and retrospective infectious disease forecasts across the United States both with and without wastewater data and in comparison to other forecasting models. Author Summary: Wastewater-based epidemiology, in combination with clinical surveillance, has the potential to improve situational awareness and inform outbreak responses. We developed a model that uses data on pathogen concentration in wastewater from one or more wastewater treatment plants, in combination with hospital admissions, to produce short-term forecasts of hospital admissions. We produced and submitted forecasts of 28-day-ahead COVID-19 hospital admissions from this model to the U.S. COVID-19 Forecast Hub during the spring of 2024 and found that it performed well in comparison to other models during that limited time period. To assess the added value of incorporating wastewater data into the model, and to investigate how it would have performed had we submitted it during the entire 2023-2024 winter epidemic wave, we performed a retrospective analysis in which we produced forecasts from the model with and without wastewater data, using data that would have been available in real time as of each forecast date. Both versions of the model would have been median overall performers had they been submitted to the Hub throughout the season. When comparing the model's performance with and without wastewater data included, we found that overall forecast performance was very similar, with wastewater data slightly reducing overall average forecast performance.
Within this result, there was significant heterogeneity, with clear instances of wastewater data improving and detracting from forecast performance. We used trends in the observed data to generate hypotheses as to the drivers of improved and reduced relative forecast performance within our model. We conclude by suggesting future work to improve the model and more broadly the application of wastewater-based epidemiology to forecasting.
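The evaluation above relies on proper scoring rules. As an illustrative sketch only (the abstract does not name its exact metric), here is the interval score, the building block of the weighted interval score commonly used in COVID-19 Forecast Hub evaluations; the interval bounds and observations below are hypothetical:

```python
def interval_score(lower, upper, alpha, observed):
    """Proper score for a central (1 - alpha) prediction interval.

    The score is the interval width plus a penalty, scaled by 2/alpha,
    for how far the observation falls outside the interval. Lower is better.
    """
    score = upper - lower
    if observed < lower:
        score += (2 / alpha) * (lower - observed)
    elif observed > upper:
        score += (2 / alpha) * (observed - upper)
    return score

# Hypothetical 80% interval (alpha = 0.2) for weekly hospital admissions
print(interval_score(10, 20, 0.2, 15))  # inside the interval: width only
print(interval_score(10, 20, 0.2, 25))  # above the interval: width + penalty
```

A sharper (narrower) interval scores better when it covers the observation, but is penalized heavily when it misses, which is what makes the score "proper".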

2
The SMART-AI trial: Real-time cholangioscopy artificial intelligence for the classification of biliary strictures

Marya, N.; Powers, P.; Marcello, M.; Rau, P.; Nasser-Ghodsi, N.; Marshall, C.; Zivny, J.; AbiMansour, J.; Chandrasekhara, V.

2025-12-27 gastroenterology 10.64898/2025.12.17.25342271
Top 0.6%
19× avg

Background: Sampling techniques have poor accuracy for classifying biliary strictures as benign or malignant. Previously, a cholangioscopy artificial intelligence (AI) outperformed sampling techniques based solely on analysis of previously recorded cholangioscopy footage. The aim of this trial was to compare the performance of a real-time cholangioscopy AI to both sampling techniques and human observers for the task of biliary stricture classification. Methods: A cholangioscopy AI computer was connected directly to a cholangioscope console. The computer analyzed the cholangioscopy video stream during procedures for suspected biliary strictures. The primary outcome of the study was comparison of the performance of cholangioscopy AI to sampling techniques (brush cytology and transpapillary forceps biopsy) for biliary stricture classification. Secondary outcomes included comparison of the AI classification performance to that of human observers (separated into junior-level and experienced-level cohorts) who reviewed the cholangioscopy footage. Results: A total of 41 patients were enrolled in the trial and had biliary strictures analyzed by cholangioscopy AI. For the classification of strictures, the AI had greater classification accuracy than standard sampling techniques (87.8% versus 67.4%; p = 0.043). Additionally, the cholangioscopy AI was significantly more accurate for biliary stricture classification than both junior-level (87.8% versus 61.5%; p = 0.001) and experienced endoscopists (87.8% versus 63.15%; p = 0.011). Conclusions: This trial demonstrates that sampling techniques and human assessment of biliary strictures are flawed, and that there may be a benefit to using a cholangioscopy AI system to aid in biliary stricture classification.

3
Exploring Cancer in Colorado using a novel data platform: the ECCO experience

Lowery, J. T.; Alquaddoomi, F.; Rubinetti, V.; Burus, T.; Jardine, C. T.; Warren, A. C.; Walsh, J. M.; Borrayo, E. T.; Davis, S.

2026-02-04 epidemiology 10.64898/2026.02.03.26345489
Top 0.8%
17× avg

Purpose: To create a publicly available, interactive data platform to visualize various data measures reflecting Colorado and its residents, supporting research and outreach efforts with a specific focus on cancer burden and disparities throughout the state. This platform, named ECCO (Exploring Cancer in Colorado), aims to integrate diverse public data sources into a unified, user-friendly interface accessible to researchers, community members, and outreach programs alike. Methods: A multi-disciplinary team developed ECCO, leveraging public data sources such as Cancer InFocus, State Cancer Profiles, and the Colorado Department of Public Health and Environment. The platform's architecture employs a three-tiered web application model, using a PostgreSQL database, a backend API built with FastAPI, and a Vue 3 frontend with an OpenLayers map. Data are organized geographically at the county and/or census tract levels, categorized into measure categories (e.g., socio-demographics, cancer risk factors), and further filterable by demographic characteristics. An automated Extract-Transform-Load (ETL) data pipeline ensures regular updates of the data. Results: The platform visualizes data such as socio-demographics, cancer risk factors, screening adherence, and cancer incidence and mortality rates. Additionally, ECCO incorporates location-specific data for cancer care facilities, health services, environmental exposures, and political boundaries. To date, ECCO has had 1.1K unique visitors and over 19K pageviews according to Google Analytics. Conclusion: The ECCO platform provides a valuable tool for understanding and addressing cancer disparities in Colorado. By integrating diverse data sources and offering interactive visualization, ECCO enhances the ability of researchers, community members, and outreach programs to identify populations at risk, inform interventions, and support research priorities.
Availability: The application and code are available at https://coe-ecco.org/ and https://github.com/colorado-cancer-center/ecco. Content Summary: Key Objective: This work sought to develop ECCO (Exploring Cancer in Colorado), an interactive, easily accessible data platform designed to visualize and understand diverse cancer-related data measures reflective of Colorado and its residents. Knowledge Generated: ECCO integrates public data from sources such as Cancer InFocus, State Cancer Profiles, and the Colorado Department of Public Health and Environment, visualizing measures such as socio-demographics, cancer risk factors, screening adherence, and cancer incidence and mortality rates at both county and census tract levels. The platform also incorporates location-specific data on cancer care facilities, health services, environmental exposures, and political boundaries.
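The Extract-Transform-Load pipeline mentioned in the Methods can be illustrated with a minimal, self-contained sketch. This is not ECCO's actual code (which lives in the linked GitHub repository); the table name, measure, and county figures below are hypothetical, and SQLite stands in for the PostgreSQL database:

```python
import sqlite3

def extract():
    # Hypothetical stand-in for pulling raw rows from public data sources
    return [
        {"county": "Denver", "cases": 150, "population": 715000},
        {"county": "Boulder", "cases": 40, "population": 330000},
    ]

def transform(rows):
    # Derive an incidence rate per 100,000 residents from raw counts
    return [
        (r["county"], round(r["cases"] / r["population"] * 100_000, 1))
        for r in rows
    ]

def load(records, conn):
    # Write the transformed measures into the database tier
    conn.execute("CREATE TABLE IF NOT EXISTS measures (county TEXT, rate REAL)")
    conn.executemany("INSERT INTO measures VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT county, rate FROM measures ORDER BY county").fetchall())
```

In a real pipeline each stage would be scheduled to run automatically so the published measures stay current, which is the role the abstract assigns to ECCO's ETL process.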

4
Two-step deep-learning candidemia prediction model using two large time-sequence electronic health datasets

Yoshida, H.; Adelman, M. W.; Rasmy, L.; Ifiora, F.; Xie, Z.; Perez, M. A.; Guerra, F.; Yoshimura, H.; Jones, S. L.; Arias, C. A.; Zhi, D.; Nigo, M.

2026-03-04 infectious diseases 10.64898/2026.03.03.26347531
Top 0.8%
16× avg

Background: Candidemia is a rare but life-threatening bloodstream infection that remains difficult to predict using conventional risk stratification approaches. As a result, empiric antifungal therapy is often delayed even in high-risk patients, highlighting the need for improved predictive strategies. Methods: We developed a deep learning model (PyTorch_EHR) to predict 7-day candidemia risk using electronic health record data from two large cohorts (Houston Methodist Hospital System [HMHS] and MIMIC-IV), including adult inpatients who underwent at least one blood culture. Model performance was compared with logistic regression (LR), LightGBM, and established intensive care unit candidemia scores. We further implemented a two-step prediction framework integrating candidemia and 30-day mortality risk models to inform empiric antifungal decision-making. Results: Among 213,404 and 107,507 patients in the HMHS and MIMIC-IV cohorts, candidemia occurred in fewer than 1% (851 [0.4%] and 634 [0.6%], respectively). PyTorch_EHR outperformed LR, LightGBM, and existing candidemia scores, particularly in terms of area under the precision-recall curve (AUPRC), in both HMHS and MIMIC-IV. By integrating 30-day mortality risk, the two-step framework identified an additional 20 and 28 candidemia cases beyond the one-step model, increasing coverage to 61% (121/199) and 46% (68/147) in HMHS and MIMIC-IV, respectively. Many patients identified by the two-step framework had high mortality yet did not receive empiric antifungal therapy (61.1% HMHS; 82.6% MIMIC-IV). Conclusion: A two-step deep-learning framework integrating candidemia and mortality risk may support early identification of high-risk patients and facilitate timely empiric antifungal therapy. Prospective studies are warranted to confirm the findings.
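The two-step framework combining a candidemia risk model with a 30-day mortality risk model can be sketched as a simple decision rule. All cutoffs below are hypothetical placeholders, not values from the study, and the function name is illustrative:

```python
def flag_for_empiric_antifungals(p_candidemia, p_mortality,
                                 cand_cutoff=0.05,
                                 relaxed_cand_cutoff=0.02,
                                 mortality_cutoff=0.30):
    """Two-step rule: flag on candidemia risk alone, or on moderate
    candidemia risk combined with high 30-day mortality risk.

    All cutoffs here are hypothetical placeholders, not study values.
    """
    if p_candidemia >= cand_cutoff:            # step 1: candidemia model alone
        return True
    return (p_candidemia >= relaxed_cand_cutoff and
            p_mortality >= mortality_cutoff)   # step 2: add mortality model
```

The second step is what lets the framework recover additional true cases: patients whose candidemia risk alone falls below the flagging threshold but whose predicted mortality makes missing an infection especially costly.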

5
Interpretable machine learning model for predicting kidney failure among CAKUT children in multicenter large-scale study

Liu, T.; Wang, H.; Liu, J.; Zhao, X.; Xia, Y.; Wang, X.; Kang, Y.; Liu, C.; Gao, X.; Jiang, X.; Mao, J.; Li, Y.; Zhang, A.; Wang, M.; Bai, H.; Shen, T.; Dang, X.; Wang, D.; Zhang, R.; Lu, Y.; Shen, Q.; Nie, S.; Chen, Y.; Xu, H.; Zhai, Y.

2026-02-10 nephrology 10.64898/2026.02.08.26345871
Top 0.9%
16× avg

Congenital anomalies of the kidney and urinary tract (CAKUT) are the leading cause of pediatric kidney failure, but predicting individual progression remains challenging. This multicenter study developed and validated POCC, a machine learning model for predicting kidney failure risk at 1, 3, and 5 years post-diagnosis in CAKUT patients. Two versions were created using data from 2,249 children. The general model achieved internal AUCs of 0.93-0.99 and external AUCs of 0.90-0.98 and 0.81-0.90 in two independent validations at pediatric and general hospitals, respectively. The specialized model, integrating congenital-hereditary features, achieved internal AUCs of 0.93-0.99 and external AUCs of 0.91-0.96 in pediatric hospitals. Deployed online, POCC demonstrated 90.7% accuracy in real-world validation, with the specialized model reaching 100% sensitivity and specificity for 5-year predictions. As the first tool for multi-timepoint risk prediction across diverse CAKUT subphenotypes per patient, POCC has strong potential to support personalized management.

6
Mortality burden of outdoor occupational heat exposure in the United States

Shkembi, A.; Schinasi, L. H.; Payne-Sturges, D.; Neitzel, R. L.

2026-01-30 occupational and environmental health 10.64898/2026.01.29.26345131
Top 0.9%
16× avg

Background: Outdoor workers are particularly vulnerable to the adverse impacts of heat, but many studies focus on heat exposure in residential settings only. This leads to a limited understanding of the full mortality burden of occupational heat exposure. Here, we aimed to improve estimates of the total short-term mortality burden attributable to outdoor occupational heat exposure in the United States (US). Methods: We developed a panel data set for 3,108 US counties during 2010-2019 by linking all-cause mortality among the working-age population, derived from CDC WONDER, with the prevalence of workers exposed to outdoor occupational heat, which integrates data on wet bulb globe temperature, workplace activities, and employment counts. We developed a quasi-Poisson regression model, adjusted for ambient temperature, total precipitation, and county and state-year fixed effects, to estimate short-term excess deaths attributable to outdoor occupational heat exposure. Findings: Nationwide, approximately 3.8% (95% CI: 2.5-5.8%) of all workers were exposed annually to dangerous wet bulb globe temperatures. This outdoor occupational heat exposure resulted in approximately 9,800 (3,100-17,000) annual excess deaths in the working-age population. An estimated 62% of excess deaths occurred in the most socially vulnerable counties, even though those counties account for only 25% of workers. Interpretation: The mortality burden of occupational heat exposure is likely far larger than the 39 annual deaths officially reported by the Bureau of Labor Statistics for this period. The workplace should be an explicit focus of heat policies, advocacy, and adaptation measures. Funding: US Centers for Disease Control and Prevention/National Institute for Occupational Safety and Health.
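A common way to turn a fitted log-linear (quasi-Poisson) coefficient into excess deaths is via the attributable fraction AF = 1 − exp(−β·x). This sketch reflects that general approach, not necessarily the authors' exact computation, and the coefficient and exposure values are purely illustrative:

```python
import math

def attributable_excess_deaths(observed_deaths, beta, exposure_prevalence):
    """Excess deaths via the attributable fraction of a log-linear model:
    AF = 1 - exp(-beta * x), applied to the observed death count.

    beta and exposure_prevalence here are illustrative, not study estimates.
    """
    attributable_fraction = 1 - math.exp(-beta * exposure_prevalence)
    return observed_deaths * attributable_fraction

# Illustrative: a coefficient of ln(2) at full exposure attributes
# half of the observed deaths to the exposure.
print(attributable_excess_deaths(1000, math.log(2), 1.0))
```

Note that when beta is zero (no estimated effect), the attributable fraction collapses to zero, so no deaths are attributed regardless of exposure prevalence.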

7
Feasibility and Performance of Procalcitonin-guided antimicrobial stewardship during autologous stem cell transplantation

Pande, A.; Adaniya, S.; Clark, W.; Wilkinson, R.; Grazziutti, M.; Apewokin, S.

2025-12-16 infectious diseases 10.64898/2025.12.15.25340973
Top 0.9%
16× avg

Background: Antibiotic stewardship during stem cell transplantation (SCT) is challenging. Procalcitonin (PCT) has been employed successfully in critical care patients to safely guide stewardship. However, procalcitonin-guided stewardship has not been robustly assessed in SCT recipients. We sought to evaluate the potential utility of PCT to guide antimicrobial de-escalation during engraftment. Methods: 100 SCT patients were prospectively enrolled in a "strategy trial" and had infectious complications documented. Lab parameters (CBC, BMP, CRP) were obtained daily as standard of care (SOC), while PCT was obtained for research purposes. Providers were blinded to PCT results. We compared duration of antimicrobial escalation between actual events (SOC model) and a proposed PCT model. In this hypothetical PCT model, antibiotic de-escalation would occur if CRP remained <100 mg/dl and PCT <0.25 ng/ml after 3 days of escalation. Escalation events were defined as a substitution or addition of an antimicrobial agent after initiation of prophylactic antimicrobials. Results: 77 patients had escalation events, and of these, 33 had bacterial infections. A total of 136 antimicrobial escalation events were identified, and of these only 39 (28.7%) were associated with documented infections. The standard-of-care model had a mean duration (±SD) of 9.08 (±6.08) antibiotic days. If the PCT model were employed, the mean duration (±SD) would be 4.44 (±6.16) days (p<0.001). The PCT model, however, would have missed 11 infections. Conclusion: Procalcitonin-guided antimicrobial stewardship during autologous stem cell transplantation is feasible; however, optimization is needed before it can be used as a tool to guide antibiotic prophylaxis during SCT.
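The hypothetical PCT model's de-escalation criterion translates directly into code. A minimal sketch of the decision logic as stated in the abstract (CRP < 100 mg/dl and PCT < 0.25 ng/ml after 3 days of escalation); the function name and example values are illustrative:

```python
def can_deescalate(crp_mg_dl, pct_ng_ml, days_since_escalation):
    """De-escalation rule from the hypothetical PCT model: de-escalate
    only after 3 days of escalation, and only if CRP is below
    100 mg/dl and PCT is below 0.25 ng/ml.
    """
    return (days_since_escalation >= 3
            and crp_mg_dl < 100
            and pct_ng_ml < 0.25)

# A patient 3 days into escalation with CRP 40 mg/dl and PCT 0.1 ng/ml
print(can_deescalate(40, 0.1, 3))
```

Requiring both biomarkers to stay low is what drives the shorter mean antibiotic duration in the PCT model, at the cost of the missed infections the authors report.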

8
Research In Your Mailbox: Remote Blood Self-sampling Enables Participation of Underserved Populations in Longitudinal Studies

Stefanovic, F.; Robertson, I.; Moloney, K.; Edelson, J.; Nguyen, S.; Shinkawa, V.; Uchimura, K.; Lin, A.; Le, L.; Tokihiro, J. C.; Takezawa, M. G.; Phan, D.; Schiffer, J.; Boeckh, M.; Adams, K. N.; Waghmare, A.; Errett, N. A.; Berthier, E.; Lim, F. Y.; Theberge, A. B.

2026-02-06 infectious diseases 10.64898/2026.02.05.26345688
Top 1%
11× avg

Importance: Remote sampling technologies are invaluable for protecting both participants and researchers when studying highly infectious diseases. When leveraged for longitudinal studies, remote sampling with transcriptomic readouts is a powerful tool for studying the host immune response. Additionally, remote study flexibility circumvents common barriers to research participation, including length of commute, transportation, and scheduling, thereby expanding access to clinical research. Objective: In this work, we investigate the effectiveness of a remote study model for reaching women from underrepresented, underserved, and underreported (U3) populations. We sought to recruit individuals who qualify as underrepresented in clinical research, who are located in rural areas, or who come from disadvantaged backgrounds per the NIH definition. Design: In this longitudinal study, U3 women positive for COVID-19 were enrolled and followed over the course of 6 months. In the first month of the infection, participants (n = 40) self-collected a set of 5 nasal swabs, 5 homeRNA-stabilized blood samples, and 2 additional unstabilized blood samples at the first and last sampling. Sampling time points were spaced 5 days apart, so that all 5 time points were completed within 25 days. homeRNA is a platform for remote self-collection of blood samples with subsequent RNA stabilization. A subset of participants likely to develop post-acute sequelae of COVID-19 (PASC), and their age-matched controls, were selected to self-collect an additional set of 5 nasal swabs and 5 homeRNA-stabilized blood samples during month 3 of study participation, with the same sampling frequency. All participants were resurveyed at months 4, 5, and 6 about their symptoms. Participants also completed surveys at each sampling and a more comprehensive survey about study experience after each set of 5 time points.
Setting: This was a fully remote study, with all sampling supplies and instructions shipped to participants. Participants self-collected blood and nasal swabs at home and shipped them back to our lab for further processing. Surveys were administered electronically using REDCap. Participants: For this study, we enrolled women who were 18 or older, met the NIH criteria for U3, and had tested positive for SARS-CoV-2 within a week of enrollment. Further, we excluded protected populations, including individuals who were pregnant and/or incarcerated. Of the 334 individuals who completed the screening process, 65 were invited into the study based on the eligibility criteria and on balancing age, race/ethnicity, and state of residence to closely correspond to the demographics of the United States. Of the 65 invited individuals, 40 were fully enrolled in the study and 39 completed all study components. Main Outcomes and Measures: Prior to the study, we proposed that the increased flexibility of a remote study design would allow for participation of populations underrepresented in clinical research. The primary measurements planned for this study consisted of usability data and general experience in a longitudinal study. These data were collected by self-report using electronically administered surveys. The Consolidated Framework for Implementation Research (CFIR), a well-established implementation science framework, was used to guide the development of questions about usability and study experience. Results: 40 women were recruited from 19 states, with diverse racial backgrounds (62% White, 15% Black or African American, 10% Asian, 5% American Indian or Alaska Native, 5% Other, 3% More than one race), a mostly even age distribution (26% ages 20-29, 15% ages 30-39, 31% ages 40-49, 28% ages 50+), and most of whom (80%) are categorized as having a disadvantaged background per the NIH.
Survey responses show high satisfaction with the study, with all participants who completed it (100%, n = 39/39) indicating that they would be willing to participate in a similar study again, and most (n = 32/39) indicating a willingness to participate for up to 4 years with around 15 samples collected per year. We note that 4 years was the longest period participants were able to select in their surveys, suggesting that participants may be willing to participate for even longer. Most (>90%) either agreed or strongly agreed that all components of the kit were easy to use. Conclusions and Relevance: The high retention (98%, n = 39/40) and satisfaction of participants in this study indicate the utility of a remote study design for longitudinal research. We also find that study topic, study flexibility, and positive interactions with the study team are important factors for participant recruitment and retention. This work suggests that the increased flexibility of a fully remote design enables engagement of individuals who may otherwise be excluded from clinical research.

9
Grinning and bearing it - A mixed methods approach to explore animal-related injuries in UK and Irish Veterinary Students

Furtado, T.; Lois Kennedy, L.; Pinchbeck, G.; Tulloch, J. S. P.

2025-12-21 occupational and environmental health 10.64898/2025.12.19.25342672
Top 1%
11× avg

Background: While veterinary surgeons are known to have particularly high rates of injury compared to other sectors, little is known about rates of injury among veterinary students. This study aims to understand animal-related injury rates, injury context and mechanisms, attitudes to reporting injuries, and behaviour change among UK and Irish veterinary students. Methods: A survey was distributed to students across all veterinary schools operating in the UK and Ireland in 2021. Questions explored participants' experience of injury by asking about their most recent and most severe injuries via quantitative and free-text questions. Data were analysed using descriptive statistics, logistic regression, and qualitative content analysis. Results: 533 responses were included in the analyses. Overall, 47.5% of students reported having been injured by an animal during the veterinary degree, and 35.5% reported being injured within the last 12 months. Most recent injuries were caused by companion animals (38.0%), livestock (37.6%), and equids (23.5%). Most severe injuries involved livestock (48.7%), companion animals (28.7%), and equids (22.1%). The content analysis highlighted that students normalised injuries and infrequently reported them to the university. Owing to course pressures, it was very rare for students to take time off from their studies or placements. Conclusions: These findings reflect concerningly high levels of injury, which are under-reported and reflect a culture of injury acceptance and expectation among students. Veterinary schools should consider lessons learnt in other work environments that have been successful in changing safety culture.

10
Comparative Analysis of Biofilm Formation in Bacterial and Fungal Isolates from Contact Lens and Non-Contact Lens Associated Keratitis

ABRAHAM, K. S.; RAVI, S. S. S.; VAJRAVELU, L. K.

2026-02-09 infectious diseases 10.64898/2026.02.09.26345896
Top 1%
11× avg

Microbial keratitis is a sight-threatening corneal infection with varying etiological agents, primarily bacteria and fungi. The goal of the current investigation was to assess and contrast the virulence factors of microorganisms isolated from non-contact lens-associated keratitis (NCLAK) and contact lens-associated keratitis (CLAK). Samples were collected from over 60 patients and analysed using standard microbiological techniques, including culture, Gram staining, KOH mount, biochemical tests, antimicrobial susceptibility testing, and biofilm assays. The results demonstrated that CLAK isolates were predominantly bacterial, especially Pseudomonas aeruginosa, known for strong biofilm production and high multidrug resistance. In contrast, NCLAK showed a higher incidence of fungal infections, particularly Candida albicans. The results highlight the significance of early diagnosis and of tailored and improved awareness regarding contact lens hygiene to prevent complications associated with keratitis.

11
The Effect of Occupational Integration on Musculoskeletal Injury in Female Marines in the Fleet: An Epidemiological Cohort Study

Fraser, J. J.; Zouris, J. M.; Hoch, J. M.; Sessoms, P. H.; MacGregor, A. J.; Hoch, M. C.

2026-02-23 occupational and environmental health 10.64898/2026.02.19.26346637
Top 1%
11× avg

Introduction: Musculoskeletal injuries (MSKIs) are ubiquitous in the U.S. military, especially among high-performing service members such as Marines. Because female service members began to be assigned to ground combat roles only in December 2015, evaluating the effect of sex on MSKI risk in ground combat occupations was not possible until there was an ample population to study. The purpose of this population-level epidemiological study was to assess (1) whether female sex was a salient risk factor for MSKI in Marines serving in different military occupations, including combat arms, and (2) the effects of integration period on MSKI risk among female Marines. Materials and Methods: A population-based, retrospective epidemiological cohort study of all U.S. Marines was performed, assessing female sex, occupation, and integration period on the prevalence of MSKI from 2011 through 2020. The Military Health System Data Repository was used to identify initial healthcare encounters for diagnosed ankle-foot, knee, lumbopelvic-hip, thoracocostal, cervicothoracic, shoulder, elbow, or wrist-hand complex injuries. Prevalence was calculated for female and male Marines in each occupational category (combat, combat support, aviators, aviation support, services) during the pre-integration (2011-2015) and post-integration (2016-2020) periods. Results: During the pre-integration period, 520/1,000 female Marines (n=13,985) and 299/1,000 male Marines (n=142,158) incurred MSKIs. In the post-integration period, the prevalence increased to 565/1,000 female Marines (n=17,608) and 348/1,000 male Marines (n=161,429). In the multivariable evaluation of sex, occupation, integration period, and the interaction of sex and occupation on combined MSKIs, only female sex was a significant factor for injury (prevalence ratio [PR]=1.99), with service in ground combat and aviation occupations identified as protective factors when compared with services occupations (PR=0.69).
When these same factors were evaluated for specific MSKI outcomes, female sex remained a robust factor in all lower quarter (PR=1.75-2.63) and upper quarter (PR=1.38-2.36) injuries except for shoulder injuries. Service in ground combat and aviation occupations was protective for all lower quarter injuries (PR=0.46-0.71). In the upper quarter, ground combat was protective for all injuries except elbow injuries (PR=0.67-0.77). Serving as an aviator was a risk factor for cervicothoracic (PR=1.57) and thoracocostal (PR=1.22) injuries and a protective factor for shoulder (PR=0.73) and wrist-hand (PR=0.46) injuries. Adjusted risk for lumbopelvic-hip (PR=1.13), ankle-foot (PR=1.53), cervicothoracic (PR=1.19), thoracocostal (PR=1.14), and elbow (PR=1.48) injuries significantly increased during the post-integration period. There was a significant sex-by-period interaction for shoulder injuries alone, with female sex in the post-integration epoch found to be salient (PR=1.26). Conclusions: Female sex was a salient factor for MSKI, with service in ground combat and aviation occupations identified as protective factors when compared with services occupations. In the evaluation of specific MSKIs, female sex remained a robust and significant factor in all lower quarter and upper quarter injuries except for shoulder injuries. There was a significant sex-by-period interaction only for shoulder conditions, with an increased risk of these injuries in female Marines in the post-integration period.
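The prevalence ratios reported above can be illustrated with a crude (unadjusted) calculation; the study's multivariable estimates adjust for occupation and period and will differ. This sketch uses a Katz log-based confidence interval, with counts scaled from the reported pre-integration prevalences for illustration only:

```python
import math

def prevalence_ratio(cases_a, n_a, cases_b, n_b, z=1.96):
    """Crude prevalence ratio of group A vs. group B with a Katz
    log-based 95% confidence interval (illustrative, unadjusted)."""
    pr = (cases_a / n_a) / (cases_b / n_b)
    se = math.sqrt(1 / cases_a - 1 / n_a + 1 / cases_b - 1 / n_b)
    return pr, pr * math.exp(-z * se), pr * math.exp(z * se)

# Scaled from the reported pre-integration prevalences (520 vs. 299 per 1,000)
pr, lo, hi = prevalence_ratio(520, 1000, 299, 1000)
print(round(pr, 2), round(lo, 2), round(hi, 2))
```

The crude ratio from these scaled counts (about 1.74) is in the same range as, but not equal to, the adjusted PR of 1.99 reported in the multivariable model.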

12
The effects of Far-UVC irradiation on the presence and concentration of ESKAPEE pathogens on hospital surfaces: study protocol for a multi-site, double-blinded randomized controlled trial in La Paz, Bolivia

Saber, L. B.; Rojas, M.; Anderson, D. M.; Anderson, D. J.; Claus, H.; Cronk, R.; Linden, K. G.; Lott, M. E. J.; Radonovich, L. J.; Warren, B. G.; Williamson, R. D.; Vincent, R. L.; Gutierrez-Cortez, S.; Calderon Toledo, C.; Brown, J.

2026-02-05 occupational and environmental health 10.64898/2026.02.04.26345557
Top 1%
10× avg

Hospital-acquired infections are a known and growing problem worldwide. Far-UVC is a novel disinfection method that inactivates bacteria with limited penetration into human skin or eyes. A clustered, unmatched, randomized controlled trial (RCT) will be implemented in two Bolivian hospitals. The intervention arm will receive functioning Far-UVC lamps, whereas the control arm will receive identical lamps that do not emit UV light (shams). Based on baseline data, 40 lamp fixtures will be installed above hospital sinks, 10 per arm per hospital. Environmental samples (air and surface swabs) will be collected and analyzed via culture and sequencing. Simultaneously, air chemical monitoring data will be collected.

13
Reclaiming health: a qualitative, explorative study of long covid recovery journeys involving mind-body approaches.

Deurman, C.; Brinkman, V.; Slagboom, M.; Bussemaker, J.; Vos, H. M. M.

2026-02-23 infectious diseases 10.64898/2026.02.21.26345052
Top 1%
10× avg

Objective: This study explored the recovery experiences of individuals who report having (largely) recovered from long covid and who attributed their improvement to mind-body approaches. Design, setting and participants: We conducted an explorative qualitative study using purposive recruitment through social media and snowball sampling. Eighteen adult women (aged 37-62 years) who self-identified as having had long covid and having substantially recovered through mind-body approaches participated in semi-structured interviews. Data were analysed using Saunders' practical thematic analysis. Results: Despite variation in personal narratives, a common trajectory emerged: participants moved away from a biomedical explanatory model towards one centred on nervous system dysregulation. This shift, sometimes following initial scepticism, was often described as a turning point, sparking hope and motivation to engage in self-directed strategies. Recovery was not linear but an iterative process, involving cycles of practice, reflection (especially when progress stagnated) and adaptation of mind-body techniques. Over time, participants gained insights into contributing factors and, in many cases, made intentional life changes to support ongoing recovery. These patterns echo findings from previous research on mind-body approaches in chronic pain and chronic fatigue, and align with neuroscientific perspectives on symptom generation. Most participants navigated this process without formal clinical support, relying instead on online communities and actively avoiding sources of (biomedical) information that conflicted with their new understanding. Conclusions: While causal inferences cannot be drawn from qualitative data, this study highlights potential mechanisms that may underpin recovery for people with long covid using mind-body approaches. Further research is needed to develop structured interventions, and to evaluate their efficacy and safety.
Future research should also explore how prevailing narratives within healthcare and society influence treatment engagement and recovery trajectories. Strengths and limitations of this study:
- This is the first study exploring experiences of recovery from long covid using mind-body approaches.
- In-depth, real-world accounts capture lived experiences over time and allow in-depth exploration of the recovery process, while the semi-structured design facilitates the emergence of themes rarely captured in clinical research.
- Generalisability is limited due to self-identified long covid status, lack of formal diagnostic verification, absence of strict definitions of mind-body approaches and recovery, and a relatively homogenous sample (mostly highly educated women).

14
Cardiovascular complications are rare in severe COVID-19 presenting with myocardial injury

O'Brien, C.; Gunturkun, F.; Deng, Y.; Chang, D.; Khanijo, S.; Barnett, C.; Kapil, S.; Daly, A.; Goffi, A.; Stavi, D.; Carrier, F. M.; Ennehas, Y.; Fiza, B.; Patterson, A.; Daubert, T.; Moll, V.; Castellucci, C.; Zekhtser, M.; Bughrara, N.; Mayette, M.; Courchesne, K.; Thompson, S.; Parikh, R.; Kim, J.; Ashland, M.; Mohabir, P.

2026-01-05 infectious diseases 10.64898/2026.01.04.26343417
Top 2%
10× avg

Background: SARS-CoV-2 infection has been linked to cardiovascular complications. Previously published data suggested that cardiac injury may be a common disease feature and contribute to morbidity and mortality in severe COVID-19. Since the early days of the pandemic, little data has emerged describing what cardiovascular complications arise from myocardial injury in severe COVID-19 and whether these complications contribute to mortality. Objectives: Describe the incidence and nature of cardiovascular complications in patients with severe COVID-19 infection and biochemical or imaging evidence of myocardial injury. Methods: We analyzed consecutive patients admitted for severe COVID-19 and presenting with biochemical or echocardiographic evidence of myocardial injury across 16 centers within the CARDIO-COVID investigator-initiated consortium. A LightGBM machine learning model was used to identify variables associated with mortality. Results: Our study included 1,328 patients. In-hospital mortality was 29.4%. Factors most associated with mortality were age, SOFA Cardiovascular score, elevated troponin, intubation, high vasoactive-inotropic score (VIS), high PEEP, and high FiO2. Cardiovascular complications were rare and did not show a significant association with mortality. Patients with high VIS and SOFA Cardiovascular scores had a significantly higher incidence of secondary bacterial pneumonias, suggesting septic shock played a role in their physiology. Investigator-adjudicated cause-of-death data identified respiratory failure and septic shock as primary causes of mortality. Conclusions: Cardiovascular complications of severe COVID-19 are uncommon and do not seem to contribute significantly to mortality. Biochemical markers of myocardial injury or strain still carry significant prognostic value, but these markers may indicate the presence of high-risk ARDS phenotypes as opposed to significant myocardial injury.

15
Extending the OMOP Common Data Model to Support Observational Peripheral Vascular Disease Research

Leese, P. J.; McIntee, T.; Browder, S. E.; Laivuori, M.; Alabi, O.; McGinigle, K. L.

2026-02-03 health informatics 10.64898/2026.02.01.26345276
Top 2%
10× avg

Background: Peripheral artery disease (PAD) and chronic limb-threatening ischemia (CLTI) cause substantial morbidity and mortality, yet research progress is limited by fragmented, non-standardized data. The Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) provides a standardized framework for electronic health record (EHR) research but lacks domain-specific detail for peripheral vascular diseases. This study aimed to develop and test a vascular-specific OMOP CDM extension to improve data standardization, enable reproducible real-world analyses, and support precision medicine research in PAD and CLTI. Methods: We identified patients with PAD, CLTI, or diabetic foot ulcers who sought care within the UNC Health System between April 2014 and July 2024. Standard OMOP tables were supplemented with peripheral vascular laboratory (PVL) data and state death records. Intermediate tables were designed for key clinical domains (e.g., smoking, comorbidities, revascularizations) to enhance reusability. Predictive models for revascularization and mortality were developed using logistic regression with Bayesian weighting and Markov Chain Monte Carlo feature selection. Clinical application: The revascularization model displayed high performance with and without important vascular variables (AUC = 0.970 and AUC = 0.969, respectively), while the mortality model demonstrated moderate accuracy (AUC = 0.656) that improved with the inclusion of vascular-specific features (AUC = 0.752). Conclusions: This vascular OMOP extension represents one of the first specialty-specific frameworks for peripheral vascular research. By extending the OMOP CDM to a vascular domain, this work advances both the technical framework and scientific capability of real-world data research in limb preservation and precision vascular medicine.

16
Occupational and Environmental Challenges and Effects of COVID-19 Testing Implementation Experienced by HIV Viral Load Laboratory Staff within a Public Health Sector Laboratory in South Africa

Sarang, S.; Matingo-Mutava, E.; Cassim, N.

2026-02-22 occupational and environmental health 10.64898/2026.02.16.26346134
Top 2%
10× avg

Background: The COVID-19 pandemic required South African public sector HIV viral load (VL) laboratories to scale up Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) testing while maintaining essential HIV services. This dual mandate placed additional pressure on diagnostic services and introduced significant occupational and environmental challenges (OEC) for staff that remain underexplored. Objective: This study aimed to investigate the OEC and effects that staff experienced during the implementation of COVID-19 testing at public sector VL laboratories in South Africa. Methods: A quantitative, cross-sectional study utilised a census approach among technical and support staff. Data were collected via a structured REDCap questionnaire using 5-point Likert scales. Pre- and post-implementation challenges were assessed across four domains: workload, environmental conditions (space, ventilation, waste), communication, and PPE availability. Statistical analyses included the Wilcoxon signed-rank and Spearman's correlation tests. Results: Perceived occupational challenges increased significantly across all domains post-implementation. Staff workload saw the highest rise (mean score 3.02 to 3.53). Adverse health effects were pervasive: 80.2% of staff reported burnout/fatigue, and 76.5% reported increased anxiety/stress. A strong positive correlation was observed between post-COVID-19 challenges and adverse mental and physical health outcomes (rho = 0.449, p < 0.001). Furthermore, 35.8% of staff considered resigning due to increased job demands. Conclusion: Integrating COVID-19 testing exacerbated systemic weaknesses, causing measurable psychological injury and threatening workforce retention. Findings suggest that the diagnostic workforce requires formal crisis surge staffing models and institutionalised mental health support to safeguard personnel and maintain essential services during future health emergencies.

17
Evaluation of short-term multi-target respiratory forecasts over winter 2024-25 in England using sub-ensemble contribution analyses

Kennedy, J. C.; Furguson, W.; Jones, O.; Ward, T.; Riley, S.; Tang, M. L.; Mellor, J.

2026-02-18 infectious diseases 10.64898/2026.02.12.26346156
Top 2%
10.0× avg

Background: Epidemic forecasting research often assesses ensembles and their component models using probabilistic scoring rules. Quantifying how individual models affect ensemble performance is challenging, particularly across multiple targets and spatial scales. Methods: We present Winter 2024-25 forecasts of influenza and COVID-19 hospital admissions in England and conduct a retrospective simulation using the operational component models. Forecasts were scored using the per capita weighted interval score (pcWIS) for counts and the ranked probability score (RPS) for ordinal trend direction. We compared operational and retrospective forecasts, used generalised additive models (GAMs) to estimate the expected change in score from the inclusion of a model in a sub-ensemble, and used Pareto analysis to identify which sub-ensembles were Pareto-optimal across scoring rules. Results: Nationally, the influenza and COVID-19 operational ensembles achieved pcWIS of 5.20 × 10^-7 and 3.98 × 10^-7, with RPS of 0.234 and 0.171, respectively. This corresponds to a 47% improvement in score versus sub-ensembles for influenza pcWIS. However, influenza operational ensembles were 22% worse than sub-ensembles, on average, when measured by RPS. For COVID-19, operational ensembles were 43% and 265% worse, on average, than retrospective sub-ensembles by pcWIS and RPS, respectively. The sub-ensemble simulation showed that individual models influenced the ensembles during different epidemic phases. The Pareto analysis demonstrated that there can be a trade-off between relative direction and absolute count score optimisation. Interpretation: Our analysis shows that UKHSA forecasts were well calibrated with observations and often had comparable performance to optimal ensembles. Our GAM and Pareto analyses inform model selection for future ensembles. Author summary: Forecasts of winter hospital pressures in England are an important tool for senior healthcare leaders.
It is common practice to produce a forecasting ensemble, i.e. to combine the predictions of multiple models into a single, more accurate prediction. Forecasting teams should strive to produce the best forecast possible; one tool for this is retrospective evaluation over a forecasting season using proper scoring rules to assess performance. Our forecasts comprise two components: an epidemic trend direction estimate and a forecast of hospital admission numbers. There are two main challenges we address. The first is understanding at which epidemic phase different ensemble contributions are most effective; the second is the joint optimisation of an ensemble for both the trend direction and admission numbers forecasts. We apply these methods to a variety of ensembles (sub-ensembles) based on our own modelling suite, and compare the sub-ensembles to our operational forecasts from the Winter 2024/25 season.
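The weighted interval score used to grade these count forecasts has a simple closed form. Below is a minimal Python sketch of the WIS as defined by Bracher et al. (2021), from which a per capita variant such as the pcWIS reported above could be obtained by dividing by population size; the function names and the single-interval example are illustrative, not the UKHSA implementation.

```python
import numpy as np

def interval_score(y, lower, upper, alpha):
    """Interval score of a central (1 - alpha) prediction interval [lower, upper]
    for observation y: interval width plus penalties for falling outside it."""
    return ((upper - lower)
            + (2.0 / alpha) * np.maximum(lower - y, 0.0)
            + (2.0 / alpha) * np.maximum(y - upper, 0.0))

def weighted_interval_score(y, median, intervals):
    """WIS: weighted average of the absolute error of the predictive median
    and the interval scores of K central intervals.
    `intervals` maps alpha -> (lower, upper)."""
    total = 0.5 * abs(y - median)
    for alpha, (lower, upper) in intervals.items():
        total += (alpha / 2.0) * interval_score(y, lower, upper, alpha)
    return total / (len(intervals) + 0.5)

# A forecast with median 100 and a single 50% interval (alpha = 0.5),
# scored against an observed count of 110:
wis = weighted_interval_score(y=110, median=100, intervals={0.5: (80, 120)})  # -> 10.0
```

With more quantile levels the same loop runs over each alpha, so the score approximates the continuous ranked probability score while using only the reported prediction intervals.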

18
Map Liberator: An open-source tool for recovering spatial epidemiological data from static situation reports

Simons, D.

2026-01-27 epidemiology 10.64898/2026.01.26.26344575
Top 2%
9.9× avg

Background: Much of the world's historical and current epidemiological data remains locked in static formats, such as PDF situation reports or image-based surveillance bulletins. Recovering this data for spatial analysis typically requires proprietary software (e.g., ArcGIS) or laborious manual entry, which is prone to transcription errors. Methods: I developed Map Liberator, an open-source application built in R and Shiny. It provides a split-screen digitisation interface that allows users to overlay static reference maps alongside interactive administrative boundaries. The tool uses a stateful rendering engine to manage complex geometries (up to Admin Level 3) and facilitates the extraction of binary, numeric, or qualitative data directly into a structured, machine-readable format. Results: I demonstrate the tool's utility by digitising Lassa fever surveillance data from Nigeria (2018-2025). The tool successfully aligned static situation reports with official Local Government Area (LGA) boundaries, allowing for the rapid recovery of binary presence/absence data that was previously inaccessible for computational modelling. Conclusions: Map Liberator provides a low-barrier, cost-effective solution for field epidemiologists and researchers to liberate spatial data from static reports, supporting the principles of FAIR (Findable, Accessible, Interoperable, Reusable) data in global health.

19
Cultryx: Precision Diagnostic Stewardship for Blood Cultures Using Machine Learning

Marshall, N. P.; Chen, W.; Amrollahi, F.; Nateghi Haredasht, F.; Maddali, M. V.; Ma, S. P.; Zahedivash, A.; Black, K. C.; Chang, A.; Deresinski, S. C.; Goldstein, M. K.; Asch, S. M.; Banaei, N.; Chen, J. H.

2026-03-04 infectious diseases 10.64898/2026.02.27.26347214
Top 2%
9.7× avg

Background: The 2024 blood culture bottle shortage brought diagnostic resource allocation to the forefront, reflecting persistent, foundational challenges with low-value testing and empiric treatment approaches under clinical uncertainty. Objective: To determine whether a machine learning approach using electronic medical record data can predict bacteremia more effectively than existing systems and practices to guide diagnostic testing and empiric treatment strategies. Methods: In a retrospective cohort of 101,812 adult emergency department encounters (2015-2025), we first established an idealized cognitive baseline by evaluating physician and generative AI (GPT-5) application of the professional society-endorsed Fabre framework on a validation subset. We then trained an XGBoost model (Cultryx) on the full cohort to predict bacteremia, benchmarking its performance against real-world clinical heuristics (SIRS, Shapiro rule). Results: For the idealized baseline, physicians applying the Fabre framework achieved 95.7% sensitivity, but GPT-5 automation failed to replicate this standard (71.6% sensitivity). In real-world benchmarking, Cultryx outperformed all clinical heuristics (AUROC 0.810). SIRS lacked specificity (41.2%), driving diagnostic overuse, while the Shapiro rule lacked sensitivity (70.2%), missing ~30% of bacteremia cases. In contrast, when calibrated to a strict 95% sensitivity target, Cultryx achieved the highest culture volume deferral rate (26.2%, deferring ~15,872 bottles with predicted negative results) while maintaining a 98.9% negative predictive value. The Cultryx score, a simplified bedside tool, retained a 20.8% deferral rate. Conclusions: Machine learning provides a superior, data-driven alternative to mainstream clinical heuristics for predicting bacteremia. By maximizing culture deferment without compromising pathogen detection, Cultryx can conserve diagnostic resources, reduce unnecessary empiric antibiotic exposure, and systematically elevate patient safety. Summary: Cultryx, a machine learning model for blood culture stewardship, outperforms standard clinical heuristics in predicting bacteremia. This approach could reduce culture utilization by over 26% while preserving pathogen detection, conserving diagnostic resources, reducing unnecessary antibiotic exposure, and elevating patient safety.
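Calibrating a classifier to a fixed sensitivity target, as described for Cultryx, amounts to choosing the decision threshold from the score distribution of the positive class on a calibration set. A minimal sketch, assuming scores in [0, 1] and binary labels; the function name and toy data are hypothetical, not the Cultryx code:

```python
import numpy as np

def threshold_for_sensitivity(scores, labels, target=0.95):
    """Largest threshold t such that classifying score >= t as positive
    keeps sensitivity (recall on true positives) at or above `target`."""
    pos = np.sort(np.asarray(scores)[np.asarray(labels) == 1])
    k = int(np.floor((1.0 - target) * len(pos)))  # positives we may miss
    return pos[k]

scores = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99])
labels = np.array([0,    0,   0,   0,   1,   1,   1,   1,   1,   1,   1,    1])
t = threshold_for_sensitivity(scores, labels, target=0.95)
deferral = np.mean(scores[labels == 0] < t)  # fraction of negatives below threshold
```

With k = floor((1 - target) * n), at least target * n of the n positives score at or above the returned threshold on the calibration data; sensitivity on held-out data will of course vary, which is why the abstract reports the realized negative predictive value alongside the target.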

20
Antibiotic coverage in biliary-stented pancreatoduodenectomy: Real-world evidence supporting piperacillin tazobactam over ampicillin sulbactam

Lettner, J. D.; Matskevich, P.; Focke, C.; Chikhladze, S.; Fichtner-Feigl, S.; Utzolino, S.; Ruess, D. A.

2026-02-14 infectious diseases 10.64898/2026.02.12.26346173
Top 2%
9.7× avg

Background: Preoperative biliary stenting alters biliary colonization and may reduce the effectiveness of perioperative antibiotic prophylaxis in pancreatoduodenectomy. Although broader-spectrum regimens have been associated with improved infectious outcomes, their microbiological adequacy in routine clinical practice remains poorly defined. We therefore evaluated the real-world adequacy of a prolonged ampicillin-sulbactam protocol, its association with infectious outcomes and survival, and the potential impact of a universal piperacillin-tazobactam strategy. Methods: We analyzed all consecutive patients who underwent elective pancreatoduodenectomy from 2002 to 2023 at our tertiary center. Demographic, operative, microbiological, and outcome data were retrieved from a prospectively maintained database. Patients were stratified by stent status. Adequacy of prophylaxis was defined as full in vitro susceptibility of all bile isolates. The outcomes included 30-day infectious morbidity, clinically relevant POPF, PPH, DGE, reoperation, 30- and 90-day mortality, and long-term survival. A coverage simulation was performed to compare ampicillin-sulbactam with a hypothetical universal piperacillin-tazobactam strategy. Statistical methods included chi-square/Fisher's exact tests, Mann-Whitney U tests, Cox models, McNemar's test, and Poisson regression. Results: Of 956 patients, 424 (44%) had a biliary stent. Technical complications were comparable between groups, and rates of POPF and PPH were not increased. However, infectious morbidity was higher in stented patients, including sepsis (RR 1.62, 95% CI 1.05-2.51) and postoperative cholangitis (RR 2.20, 95% CI 1.36-3.56). Thirty- and 90-day mortality were increased (RR 2.88 and 2.73) but lost significance after adjustment. Bile cultures predominantly yielded Enterococcus and Enterobacterales with low ampicillin-sulbactam susceptibility. Overall adequacy was 21.7%. Among patients with bile cultures (n = 474), ampicillin-sulbactam covered 43.7% (207/474) versus 81.2% (385/474) with piperacillin-tazobactam; in stented patients with cultures (n = 397), coverage increased from 41.8% to 78.1%. Adequate ampicillin-sulbactam coverage was not associated with reduced infectious outcomes in Poisson models. Conclusion: Preoperative stenting creates a polymicrobial, partially resistant biliary niche that ampicillin-sulbactam does not sufficiently cover. Our data show that a piperacillin-tazobactam strategy substantially improves coverage, and it was therefore implemented at our center. Core messages:
- Stented patients exhibit a distinct infectious risk profile characterized by Enterococcus- and Enterobacterales-dominated bile colonization rather than increased rates of technical complications.
- In stented patients, real-world microbiological coverage of ampicillin-sulbactam was limited, and in vitro susceptibility did not independently translate into reduced postoperative infectious morbidity.
- Broader prophylaxis, such as piperacillin-tazobactam, aligns with the actual flora and nearly doubles theoretical coverage, addressing the mismatch between stent-associated biofilms and narrow regimens.
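McNemar's test, listed among the statistical methods, compares the two regimens' paired per-patient coverage outcomes using only the discordant pair counts. A self-contained sketch using the 1-df chi-square approximation; the counts in the example are hypothetical, not the study's actual cross-tabulation:

```python
import math

def mcnemar(b, c, correction=True):
    """McNemar's test for paired binary outcomes.
    b, c: discordant pair counts (covered by one regimen but not the other).
    Returns (chi2, p) using the chi-square distribution with 1 df."""
    num = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    chi2 = num / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# e.g. 180 patients covered only by piperacillin-tazobactam versus 2 covered
# only by ampicillin-sulbactam would give an overwhelmingly significant asymmetry:
chi2, p = mcnemar(180, 2)
```

Because concordant pairs (covered by both or by neither regimen) cancel out, the test isolates exactly the shift in coverage that the simulation is designed to measure.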